Article

Children use non-verbal cues to learn new words from robots as well as people

Abstract

Social robots are innovative new technologies that have considerable potential to support children’s education as tutors and learning companions. Given this potential, it behooves us to study the mechanisms by which children learn from social robots, as well as the similarities and differences between children’s learning from robots as compared to human partners. In the present study, we examined whether young children will attend to the same nonverbal social cues from a robot as from a human partner during a word learning task, specifically gaze and bodily orientation to an unfamiliar referent. Thirty-six children viewed images of unfamiliar animals with a human and with a robot. The interlocutor (human or robot) oriented toward, and provided names for, some of the animals, and children were given a posttest to assess their recall of the names. We found that children performed equally well on the recall test whether they had been provided with names by the robot or by the human. Moreover, in each case, their performance was constrained by the spatial distinctiveness of nonverbal orientation cues available to determine which animal was being referred to during naming.

... A recent comprehensive review of social robots in education [6] states, "Many studies using robots do not consider learning in comparison with an alternative, such as computer-based or human tutoring, but instead against other versions of the same robot with different behaviors... Comparisons between robots and humans are rare in the literature, so no meta-analysis data were available to compare the cognitive learning effect size". A few RALL studies [10,32,35,42,47,48] employed experiments and surveys that compared robots and humans; however, they did not achieve clear results: they did not report statistical tests, lacked statistical information, or did not compare learning outcomes. ...
... Only a few studies have compared RALL systems with human tutors. These studies compared robot and adult tutors in children's L1 learning [47,48], robot and child peers in children's L2 learning [35,42], and teleoperated robot and human facilitators in L2 learning for adults [32]. In addition, one study conducted an exploratory analysis comparing robot and human tutors [10]. ...
... The results showed no significant differences in the number of nouns learned by the children (means, statistics, and effect sizes for each condition are not stated). Westlund et al. [48] compared the conditions of learning with a robot and an adult in a similar task. This experiment also showed no significant differences in the children's recall (effect size not stated). ...
Article
Full-text available
This study explores how well current mainstream Robot-Assisted Language Learning (RALL) systems produce learning outcomes compared to human tutors teaching a typical English conversation lesson. To this end, an experiment was conducted with 26 participants divided into a RALL group (14 participants) and a human tutor group (12 participants). All participants took a pre-test on the first day, followed by 30 min of study per day for 7 days, and 3 post-tests on the last day. The test results indicated that the RALL group improved lexical/grammatical error rates and fluency of speech considerably more than the human tutor group. Other characteristics of speech, such as rhythm, pronunciation, complexity, and task achievement, did not differ between the groups. The results suggested that exercises with the RALL system enabled participants to commit the learned expressions to memory, whereas those with human tutors emphasized communication with the participants. This study demonstrated the benefits of RALL systems, which can work well in lessons that human tutors find hard to teach.
... When coding the corpus, we see that 29 of the 30 papers are in the cognitive learning domain. One paper is in the social domain [58], one is a combination of the cognitive and physical learning domains [69], two are a combination of cognitive and emotional [50,66], and six are a combination of the cognitive and social domains [60,56,57,47,55,59]. Three papers are on programming [42,53,56], three on language [44,60,70], one on social skills [58], and others on narrower subjects such as sustainability [45], olive oil production [51,57], and attention skills [50]. ...
... We found 10 experimental studies [52,61,56,48,60,51,57,70,58,65], in which some qualitative methods were also partly used, and one non-experimental study based solely on a quantitative approach [42] to assess the learning outcome. All of these 11 studies presented a hypothesis related to learning in children and tested the hypothesised effect on learning in the study. ...
... The learning is incidental, in that the task is not presented to the participants as a language learning task. While many previous studies [40,69,74] in RALL have used teleoperation methodologies (Wizard-of-Oz (WoZ)), our system is entirely autonomous. We think this is important, as teleoperated systems might give over-optimistic results, and most of RALL's real-world benefits arise when systems are used autonomously. ...
... During the past ten years, there has been a leap in the number of RALL studies. Most of these studies use anthropomorphic robots, which mostly take the social role of teacher/tutor [62], peer [40], or learner [68]. Although these studies cover many aspects of language learning, vocabulary acquisition 1 has received the majority of attention among all [58,70]. ...
... However, the advantage is not clear when comparing robots to other technologies. For the basic words, the learning outcome was similar whether using an iPad [74], human teacher [40,74], computers [31], virtual agents [14], or a social robot. Another review [70] covers research between 2004 and 2018 and includes 33 studies, of which 13 are about word learning. ...
Conference Paper
Full-text available
The use of social robots as a tool for language learning has been studied quite extensively recently. Although their effectiveness and comparison with other technologies are well studied, the effects of the robot's appearance and the interaction setting have received less attention. As educational robots are envisioned to appear in household or school environments, it is important to investigate how their designed persona or interaction dynamics affect learning outcomes. In such environments, children may do the activities together or alone or perform them in the presence of an adult or another child. In this regard, we have identified two novel factors to investigate: the robot's perceived age (adult or child) and the number of learners interacting with the robot simultaneously (one or two). We designed an incidental word learning card game with the Furhat robot and ran a between-subject experiment with 75 middle school participants. We investigated the interactions and effects of children's word learning outcomes, speech activity, and perception of the robot's role. The results show that children who played alone with the robot had better word retention and anthropomorphized the robot more, compared to those who played in pairs. Furthermore, unlike previous findings from human-human interactions, children did not show different behaviors in the presence of a robot designed as an adult or a child. We discuss these factors in detail and make a novel contribution to the direct comparison of collaborative versus individual learning and the new concept of the robot's age.
... The most frequent context of learning is school (24 papers), followed by public knowledge institutions such as museums and science centers (7 papers), labs (7 papers), after-school and summer camps (6 papers), homes (6 papers), kindergartens and preschools (2 papers) [35,333], and one therapeutic center [30]. Several papers did not specify the context. ...
... Five papers use machine learning (ML) [322,323,330,350,351]. Three papers use robots [138,215] or social robots [333]. Four papers use Scratch [234,272,312,350], while [121] use a different block coding environment, and [170] a visual programming environment. ...
... The use of a control group was less common. Only 14 papers described some kind of controlled experiment: in 10 studies children received one of the conditions [20,21,36,60,100,118,159,196,215,311], and in 4 studies children received both conditions in a counterbalanced order in a comparative approach [68,192,277,333]. A longitudinal approach was used in 11 papers [30,39,48,68,100,115,121,196,214,234,268,347], ranging from a one-week summer camp for programming [68,234] to a two-year Design-Based Research project to develop social media tools for children's learning [347]. ...
... However, only a few works have directly compared educational robots and humans (Belpaeme et al., 2018). Some of these have compared robots acting as tutors, instructors or interlocutors to humans (cf. Park et al., 2011; Serholt et al., 2014; Kennedy et al., 2016a; Kory Westlund et al., 2017). Another study compared a robot tutee to a human teacher, as well as to a tablet-only condition (Zhexenova et al., 2020). ...
... With respect to children, studies comparing robots to human instructors/tutors have been conducted with children ranging from the younger ages of 2-5 (Moriguchi et al., 2010; Moriguchi et al., 2011; Kory Westlund et al., 2017) and 6-9 (Chandra et al., 2015; Kennedy et al., 2016a; Zhexenova et al., 2020) to 11- to 15-year-olds (Serholt et al., 2014). ...
... The studies with younger children focused on the learning of novel words (Moriguchi et al., 2011; Kory Westlund et al., 2017) or the extent to which a robot versus a human could influence children's behavior (Moriguchi et al., 2010). In the latter case, the authors investigated perseverative behaviors in children as influenced by either a robot's or a human's demonstration (on video) of how to sort a deck of cards. ...
Article
Full-text available
Social robots are increasingly being studied in educational roles, including as tutees in learning-by-teaching applications. To explore the benefits and drawbacks of using robots in this way, it is important to study how robot tutees compare to traditional learning-by-teaching situations. In this paper, we report the results of a within-subjects field experiment that compared a robot tutee to a human tutee in a Swedish primary school. Sixth-grade students participated in the study as tutors in a collaborative mathematics game where they were responsible for teaching a robot tutee as well as a third-grade student in two separate sessions. Their teacher was present to provide support and guidance for both sessions. Participants' perceptions of the interactions were then gathered through a set of quantitative instruments measuring their enjoyment and willingness to interact with the tutees again, communication and collaboration with the tutees, their understanding of the task, sense of autonomy as tutors, and perceived learning gains for tutor and tutee. The results showed that the two scenarios were comparable with respect to enjoyment and willingness to play again, as well as perceptions of learning gains. However, significant differences were found for communication and collaboration, which participants considered easier with a human tutee. They also felt significantly less autonomous in their roles as tutors with the robot tutee as measured by their stated need for their teacher's help. Participants further appeared to perceive the activity as somewhat clearer and working better when playing with the human tutee. These findings suggest that children can enjoy engaging in peer tutoring with a robot tutee. However, the interactive capabilities of robots will need to improve quite substantially before they can potentially engage in autonomous and unsupervised interactions with children.
... Therefore, there has been a growing interest in exploring the benefits of child-robot interaction for educational purposes using social robotics. Social robots can support the particular educational needs of every child [11,12]. Hence, teachers and education therapists may use social robots as assistants to make their practice with children with different needs easier [13][14][15]. ...
... In particular, we can find several examples of the use of NAO-type robots with children with attention deficit disorder (ADD) or with speech disorders [12,14,23,25], because their communication tools are programmed with predictable and simple functions for the children. ...
... They encourage researchers to conduct more studies testing different robots and activities, and also collecting parents' and teachers' voices [14,15]. Lately, the observation that children can face a robot with less fear than a human, owing to the robot's predictability, seems to be another reason to promote the use of robots in therapeutic and educational contexts [12,23]. ...
Article
Full-text available
The aim of this study was to explore the potential of using a social robot in speech therapy interventions with children. A descriptive and explorative case study design was implemented, involving an intervention for language disorder in five children with different needs, aged 9 to 12 years. The children participated in individual sessions with a NAO-type robot. Qualitative methods were used to collect data on aspects of viability, usefulness, barriers and facilitators for the child as well as for the therapist, in order to obtain an indication of the effects on learning and the achievement of goals. The main results pointed out the affordances and possibilities of using a NAO robot to achieve speech therapy and educational goals. A NAO can contribute to eliciting motivation, readiness for learning, and improving children's attention span. The results of the study showed the potential that NAO has in therapy and education for children with different disabilities. More research is needed to gain insight into how a NAO can best be applied in speech therapy to make education more inclusive.
... To the best of our knowledge, only one previous study has compared how children use non-verbal cues for learning across a robot and a human speaker. In this study by Kory Westlund et al. [21], two- to five-year-old children needed to follow a speaker's eye gaze or bodily orientation to figure out the referent of a novel word. The authors found that children learned new label-referent mappings above chance level, irrespective of whether a robot or a human adult administered the task. ...
... First, we used tablets displaying photographs of the objects instead of the actual objects, since the robot we used was not capable of placing real objects on the table. Following Kory Westlund et al. [21], we used two tablets, each displaying one of the photos, to make sure that there was a large enough spatial distance between the two pictures for children to identify which picture the robot pointed at. Photographs of the objects used in Verhagen et al. [14] were presented. ...
... No difference was found in children's reliance on non-verbal cues depending on whether a robot or a human provided these cues. This result aligns with the results by Kory Westlund et al. [21], who found no difference in children's ability to use the non-verbal cues across a robot and a human. ...
Article
Full-text available
Robots are used for language tutoring increasingly often, and commonly programmed to display non-verbal communicative cues such as eye gaze and pointing during robot-child interactions. With a human speaker, children rely more strongly on non-verbal cues (pointing) than on verbal cues (labeling) if these cues are in conflict. However, we do not know how children weigh the non-verbal cues of a robot. Here, we assessed whether four- to six-year-old children (i) differed in their weighing of non-verbal cues (pointing, eye gaze) and verbal cues provided by a robot versus a human; (ii) weighed non-verbal cues differently depending on whether these contrasted with a novel or familiar label; and (iii) relied differently on a robot’s non-verbal cues depending on the degree to which they attributed human-like properties to the robot. The results showed that children generally followed pointing over labeling, in line with earlier research. Children did not rely more strongly on the non-verbal cues of a robot versus those of a human. Regarding pointing, children who perceived the robot as more human-like relied on pointing more strongly when it contrasted with a novel label versus a familiar label, but children who perceived the robot as less human-like did not show this difference. Regarding eye gaze, children relied more strongly on the gaze cue when it contrasted with a novel versus a familiar label, and no effect of anthropomorphism was found. Taken together, these results show no difference in the degree to which children rely on non-verbal cues of a robot versus those of a human and provide preliminary evidence that differences in anthropomorphism may interact with children’s reliance on a robot’s non-verbal behaviors.
... This approach has been taken several times (e.g., References [13,50,61]). Although not common, this output may be processed after recording, for example by pitch shifting an adult voice to make it sound child-like [111,112]. ...
... A more interesting question is: How do robots compare with other technologies in their ability to do so? Studies suggest that, at least for simple vocabulary teaching, robots perform on par with iPads [110], and for that matter, human teachers [110,112]. All three served equally well for transferring knowledge of rudimentary vocabulary. ...
... However, languages run the gamut, including artificial languages such as Toki Pona and ROILA and non-verbal languages such as sign languages. Only four studies have focused on native language development [47,61,72,112]. Differences exist between possible strategies to teach foreign and native languages. For word learning, native language learning requires the mapping of a name to an image of the target, while foreign language learning may only employ mapping a new name to an existing name space. ...
Article
Full-text available
Robot-assisted language learning (RALL) is becoming a more commonly studied area of human-robot interaction (HRI). This research draws on theories and methods from many different fields, with researchers utilizing different instructional methods, robots, and populations to evaluate the effectiveness of RALL. This survey details the characteristics of robots used—form, voice, immediacy, non-verbal cues, and personalization—along with study implementations, discussing research findings. It also analyzes robot effectiveness. While research clearly shows that robots can support native and foreign language acquisition, it has been unclear what benefits robots provide over computer-assisted language learning. This survey examines the results of relevant studies from 2004 (RALL's inception) to 2017. Results suggest that robots may be uniquely suited to aid in language production, with apparent benefits in comparison to other technology. As well, research consistently indicates that robots provide unique advantages in increasing learning motivation and in-task engagement, and decreasing anxiety, though long-term benefits are uncertain. Throughout this survey, future areas of exploration are suggested, with the hope that answers to these questions will allow for more robust design and implementation guidelines in RALL.
... One of the main theoretical perspectives identified on social robots and language and literacy was a robot's capacity to scaffold learning (Kanda et al. 2004; Kennedy et al. 2016; Kory Westlund et al. 2017a) within a child's Zone of Proximal Development (ZPD) (Kory Westlund and Breazeal 2015; Mazzoni and Benvenuti 2015). Scaffolding is the process whereby a more knowledgeable other provides a child prompts and clues to complete a task (Wood et al. 1976). ...
... Other physical features of robots were desired in the reviewed studies because they appealed to young children. Such considerations led some researchers to use the fluffy non-gendered pet-like DragonBot or Tega robot due to its animated facial expressions and 'squash and stretch' movement of its body, head rotation capabilities, and child-like voice (Gordon et al. 2016;Kory Westlund and Breazeal 2015;Kory Westlund et al. 2017a). Some researchers required a robot to move around the classroom or school so the degree of a robot's mobility was an important consideration for their purposes. ...
... Of the 13 reviewed studies (Table 1), some provided only a short time for the child to play with the social robot (e.g., 1 session) whilst others provided more frequent encounters over several weeks. For example, Kory Westlund et al. (2017a) examined word learning with social robots in 2 to 5-year-old children (N = 36) using a picture matching activity. A DragonBot and a child played an animal picture-naming matching task, with the time spent with the robot limited to one 10- to 15-min session. ...
Article
Full-text available
Due to recent advances in technology, social robots are emerging as educational tools with the potential to enhance early language and literacy skills in young children. Social robots are defined as machines that can socially interact and communicate intelligently with humans. A review of the literature was conducted to explore current knowledge on social robots and early language and literacy learning in typically developing children (0 to 8 years old). The database search terms were “social robots” AND (literacy OR language) AND “education”. Twelve databases were searched and 13 studies met the search criteria. Five key themes were identified: A theoretical framework for learning with social robots; Child engagement with social robots; Social robots and language and literacy activities; Social robots and language and literacy learning; and Characteristics of social robots for education. Few studies were found that specifically addressed social robots and early literacy learning. Although social robots were found to support early language learning, further research is needed to investigate social robots and early literacy learning in young children.
... Children appear to rely on visual information when speech is novel (e.g., a label for an unfamiliar object) or unclear (e.g., in the case of referential ambiguity). Studies have found that children use gaze direction, body orientation, and index-finger pointing as cues to learn the reference of novel words from both humans and robots (Baldwin et al., 1996;Grassmann & Tomasello, 2010;Kory Westlund et al., 2017;Verhagen et al., 2019). Recently, Chen et al. (2021) showed that caregivers touched objects more often while naming them when the object was unfamiliar to the child. ...
... Finally, all bodily responses were subdivided into (1) leaning closer to the infant, (2) turning to the infant, (3) turning to the toy, and (4) any affective behaviour (e.g., hugging or touching the infant). Body orientations were included in the coding scheme because they may serve as referential cues when hearing novel speech (e.g., Kory Westlund et al., 2017). For the full coding scheme including definitions, see Appendix A. ...
Article
Full-text available
Caregivers use a range of verbal and nonverbal behaviours when responding to their infants. Previous studies have typically focused on the role of the caregiver in providing verbal responses, while communication is inherently multimodal (involving audio and visual information) and bidirectional (exchange of information between infant and caregiver). In this paper, we present a comprehensive study of caregivers’ verbal, nonverbal, and multimodal responses to 10-month-old infants’ vocalisations and gestures during free play. A new coding scheme was used to annotate 2036 infant vocalisations and gestures of which 87.1 % received a caregiver response. Most caregiver responses were verbal, but 39.7 % of all responses were multimodal. We also examined whether different infant behaviours elicited different responses from caregivers. Infant bimodal (i.e., vocal-gestural combination) behaviours elicited high rates of verbal responses and high rates of multimodal responses, while infant gestures elicited high rates of nonverbal responses. We also found that the types of verbal and nonverbal responses differed as a function of infant behaviour. The results indicate that infants influence the rates and types of responses they receive from caregivers. When examining caregiver-child interactions, analysing caregivers’ verbal responses alone undermines the multimodal richness and bidirectionality of early communication.
... Prior work has shown that children perceive social robots to be closer to human beings than mere machines [2]. Research has suggested that the factors that underlie these perceptions include familiarity, appearance, first interactions, and impressions [22,44]. First, children's familiarity with robots can contribute to a positive user experience with robots. ...
... Other research has shown that appearance, first interaction, and first impressions (i.e., judgements made during the initial encounter based on limited information [45]) are closely linked and influence children's perceptions of robots. Prior work has shown that the appearance of the robot, such as facial expressions, non-verbal cues, and physical attributes, can impact children's likability, receptivity, and expectations toward robots [7,44]. These studies have also highlighted the importance of entertaining and engaging first interactions, as these elements serve to ease the tension ... [Figure 2 caption: Exploration, Design, and Evaluation of the Unboxing Experience. The exploration and design of children's unboxing experiences involved observations of children's unboxing of a social robot (left) and co-design sessions to design new unboxing experiences (middle).] ...
Preprint
Full-text available
Social robots are increasingly introduced into children's lives as educational and social companions, yet little is known about how these products might best be introduced to their environments. The emergence of the "unboxing" phenomenon in media suggests that introduction is key to technology adoption, where initial impressions are made. To better understand this phenomenon toward designing a positive unboxing experience in the context of social robots for children, we conducted three field studies with families of children aged 8 to 13: (1) an exploratory free-play activity (n = 12); (2) a co-design session (n = 11) that informed the development of a prototype box and a curated unboxing experience; and (3) a user study (n = 9) that evaluated children's experiences. Our findings suggest the unboxing experience of social robots can be improved through the design of a creative aesthetic experience that engages the child socially to guide initial interactions and foster a positive child-robot relationship.
... It is evident that RALL oral interactive mechanisms can be multifarious, each specific to the oral communicative goal and context. In most cases, the interactions were based on robotic functions such as (a) speaking [32], (b) making gestures and movements [39], (c) singing [34], (d) object detection [40,41], (e) voice recognition [42], and (f) display of digital content on accompanying tablets [43]. While robots were used to facilitate bi-directional communication by initiating or engaging in verbal, gestural, and physical interactive processes to allow learners to practice receptive (e.g., listening and reading) and productive (e.g., speaking and writing) language use, human facilitators constantly provided procedural, learning, and technical support [34,38] to learners during the interactive tasks. ...
... The cognitive learning outcomes of engaging learners in RALL oral interactions were reflected in effective academic achievement [35], increased concentration [35], understanding of new words through pictures, animation, and visual aids [44], and significant improvement in word-picture association abilities [46]. Children also gained picture-naming ability [41]. In terms of the acquisition of language skills, there was significant improvement in learners' speaking skills [45]. ...
Article
Full-text available
Although educational robots are known for their capability to support language learning, how actual interaction processes lead to positive learning outcomes has not been sufficiently examined. To explore the instructional design and the interaction effects of robot-assisted language learning (RALL) on learner performance, this study systematically reviewed twenty-two empirical studies published between 2010 and 2020. Through an inclusion/exclusion procedure, general research characteristics such as the context, target language, and research design were identified. Further analysis on oral interaction design, including language teaching methods, interactive learning tasks, interaction processes, interactive agents, and interaction effects showed that the communicative or storytelling approach served as the dominant methods complemented by total physical response and audiolingual methods in RALL oral interactions. The review provides insights on how educational robots can facilitate oral interactions in language classrooms, as well as how such learning tasks can be designed to effectively utilize robotic affordances to fulfill functions that used to be provided by human teachers alone. Future research directions point to a focus on meaning-based communication and intelligibility in oral production among language learners in RALL.
... It is, therefore, important to understand which robot behavior can lead to a positive effect on children's engagement. Many studies have investigated the effect of robot behavior on children's engagement, looking at different robot behaviors such as the robot's gestures [10], expressiveness of the voice [26], or the role of the robot [11,27]. De Wit et al. [10] investigated the effect of gestures on 5-year-old children's engagement and found positive effects. ...
... Kory-Westlund et al. [26] found that 5-year-old children were more engaged with a robot exhibiting expressive behaviors. A recent study showed that 5- to 7-year-old children who interacted with a robot acting as a peer showed more affect during the interaction than when interacting with a robot acting as a tutor [27]. ...
Article
Full-text available
In this paper, we examine to what degree children of 3–4 years old engage with a task and with a social robot during a second-language tutoring lesson. We specifically investigated whether children’s task engagement and robot engagement were influenced by three different feedback types by the robot: adult-like feedback, peer-like feedback and no feedback. Additionally, we investigated the relation between children’s eye gaze fixations and their task engagement and robot engagement. Fifty-eight Dutch children participated in an English counting task with a social robot and physical blocks. We found that, overall, children in the three conditions showed similar task engagement and robot engagement; however, within each condition, they showed large individual differences. Additionally, regression analyses revealed that there is a relation between children’s eye-gaze direction and engagement. Our findings showed that although eye gaze plays a significant role in measuring engagement and can be used to model children’s task engagement and robot engagement, it does not account for the full concept and engagement still comprises more than just eye gaze.
... Most of these studies have involved embodied conversational agents, such as robots or onscreen intelligent avatars. For example, Westlund et al. (2017) found that children learned unfamiliar words equally well with a robot or a human interlocutor. Hong et al. (2016) demonstrated that incorporating a robot teaching assistant in a classroom led to similar levels of reading and writing improvement as compared to having a human assistant. ...
... This is in line with the emerging body of research demonstrating the potential benefits of artificially intelligent learning companions. However, in contrast to prior research on these benefits that typically involved robots (e.g., Breazeal et al., 2016;Westlund et al., 2017), the conversational agent used in our study was disembodied and thus not capable of utilizing non-verbal expressions to facilitate the dialogue. That this agent, with only a voice interface, can benefit children's story comprehension as much as face-to-face human partners reinforces the importance of verbal dialogue in promoting children's language skills laid out in Vygotsky's (2012) theory. ...
Article
Full-text available
Dialogic reading, when children are read a storybook and engaged in relevant conversation, is a powerful strategy for fostering language development. With the development of artificial intelligence, conversational agents can engage children in elements of dialogic reading. This study examined whether a conversational agent can improve children's story comprehension and engagement, as compared to an adult reading partner. Using a 2 (dialogic reading or non‐dialogic reading) × 2 (agent or human) factorial design, a total of 117 three‐ to six‐year‐olds (50% Female, 37% White, 31% Asian, 21% multi‐ethnic) were randomly assigned into one of the four conditions. Results revealed that a conversational agent can replicate the benefits of dialogic reading with a human partner by enhancing children's narrative‐relevant vocalizations, reducing irrelevant vocalizations, and improving story comprehension.
... While the development of children's sharing behavior has received extensive attention in the literature (for reviews see Kuhlmeier, Dunfield, & O'Neill, 2014; Martin & Olson, 2015), one contemporary aspect has not been acknowledged until now: children growing up today are not just interacting with other children and adults, they are also faced with a multitude of technological and digital agents, including robots. Robots are starting to appear in children's daily lives as household tools, toys, and educational assistants (e.g., Fridin, 2014; Kory Westlund et al., 2017; Yu & ...) ...
... This is a remarkable finding in itself. While previous research has shown that children anthropomorphize robots (e.g., Chernyak & Gary, 2016), learn from them (Kory Westlund et al., 2017), imitate them to some extent (Sommer et al., 2020), attribute goals to their movements, help them, attribute moral concern to them to some extent (Kahn et al., 2012; Sommer et al., 2019), and are influenced by them in their decision-making (Vollmer, Read, Trippas, & Belpaeme, 2018), this is the first time we have evidence of ... [Fig. 2 caption: Children in both age groups attributed more anthropomorphic qualities to the robot that was described as having affective states compared to the robot that was introduced as having no emotional capacity, thus confirming the effect of our affective state manipulation.] ...
Article
Sharing helps children form and maintain relationships with other children. Yet, children born today interact not only with other children, but increasingly with robots as well. Little is known about whether and how children treat robots as recipients of prosocial acts. We thus investigated children’s sharing behavior towards robots. Specifically, we assessed the effect of anthropomorphic appearance and affective state attributions. Children (4–9 years old; n = 120) were introduced to robots that varied in the extent to which they looked human-like. Children’s perceptions of the robots’ affective states were manipulated by explicitly demonstrating one robot as having feelings and the other one not. Subsequently, children’s sharing behavior towards and feelings about sharing with these robots were measured. Results indicate that there was no effect of anthropomorphic appearance on sharing behavior. However, importantly, children in both age groups shared more resources with a robot to which they attributed affective states, and expressed more positive emotional judgments about sharing with that robot as well. An exploratory mediation analysis further revealed that children’s positive feelings about sharing guided their actual sharing behavior with robots. In sum, children show more pro-social behavior when they believe a robot can feel.
... The use of social robots in education has been found very useful for addressing each child's individual needs and supporting learning across various domains [23,24]. Therefore, teachers and education therapists are considering humanoid robots a useful tool for their practice [25][26][27]. ...
... Previous studies have shown that children with special needs tend to interact smoothly with robots, as robots are more predictable than human beings and do not express emotions; however, most of these studies were carried out with children with autism spectrum disorders (ASD). The use of humanoid robots such as NAO-type robots with children with attention deficit disorder (ADD), with or without hyperactivity (ADHD), and with speech and language disorders such as dyslalias or dyslexias is scarce [21,22,24,26]. In these cases, supports such as video and audio content are normally used as teaching media [32,33]. ...
Article
Full-text available
The effectiveness of social robots such as NAO in pedagogical therapies presents a challenge. There is abundant literature on therapies using robots with children with autism, but there is a gap to be filled for other educational needs. This paper describes an experience of using a NAO as an assistant in logopedic and pedagogical therapy with children with different needs. Even though the initial robot architecture is based on generic behaviors, the loading and execution time for each specific requirement and the needs of each child in therapy made it necessary to develop "Adaptive Behaviors". These evolved into an adaptive architecture, applied to the engineer-therapist-child interaction, which required the engineer-programmer to always be present during the sessions. Benefits from the point of view of the therapist and the children, and the acceptance of NAO in therapy, are shown. A robot in speech-therapy sessions can play a positive role in several logopedic aspects, serving as a motivating factor for the children. Future work should be oriented toward developing intelligent algorithms to eliminate the presence of the engineer-programmer in the sessions. Additional work should consider deepening the psychological aspects of using humanoid robots in educational therapy.
... In fact, children perceive voice assistants as a less appealing and pleasurable interaction partner (Sinoo et al., 2018). Some social cues seem to play an important role in the communicative expectations of children in interactions with robots: children are particularly attentive and receptive to robots with high non-verbal contingency (Breazeal et al., 2016) and an expressive narrative style (Westlund et al., 2017) and show longer engagement when robots adapt to their affective states (Ahmad et al., 2017). ...
... Evidence for social interactions with robots shows that children quickly form social bonds with them (Tanaka et al., 2007) and treat them socially (e.g., hugging, handshaking, joint attention, prosocial behaviour, social conformity) (Kahn et al., 2012;Kim et al., 2009;Melson et al., 2009;Vollmer et al., 2018). Children are particularly engaged when interacting with a robot with high non-verbal contingency (Breazeal et al., 2016), an expressive narrative style (Westlund et al., 2017) and the ability to adapt to their affective states (Ahmad et al., 2017). The social agency theory (Mayer et al., 2003) argues that the quality of social interaction increases with the number of social characteristics an artificial agent possesses because humans tend to interpret the interaction as a social communicative situation, which leads to a deeper processing of the information presented by the agent. ...
Article
Full-text available
The growing prevalence of artificial intelligence and digital media in children's lives provides them with the opportunity to interact with novel non-human agents such as robots and voice assistants. Previous studies show that children eagerly adopt and interact with these technologies, but how do children distinguish between artificial intelligence and humans? Are voice assistants similar to humans as communicative and social interaction partners despite their features being very limited? In this study, the communication patterns and prosocial outcomes of interactions with voice assistants were investigated. Children between 5 and 6 years (N = 72) of age solved a treasure hunt in either a human or voice assistant condition. During the treasure hunt, the interaction partner supplied information either about their knowledge of or experience with the objects. Afterwards, children were administered a sharing task and a helping task. Results revealed that children provided voice assistants with less information than humans. Sharing was influenced by the type of information shared in the human condition but not in the voice assistant condition. Overall, these results suggest that children do not impose the same expectations on voice assistants as they do on humans. Consequently, cooperation between humans differs from cooperation between humans and computers.
... It is thought that a large amount of the meaning that individuals transmit to one another is carried by the many nonverbal communication modes that people use; for example, robots can use nonverbal cues to teach children new words [67]. Adding nonverbal interaction skills to robots thus opens up the possibility of creating social robots [13], that is, robots that can engage with people more effectively. ...
Article
Full-text available
Robotics is a highlight of artificial intelligence because of its intrinsic involvement with the physical world, and robots are an increasingly widespread presence in our lives, from home to workplace. As humans do not have to worry about robots replacing them on a large scale, thinking and working with these machines will bring some advantages. For example, the fully autonomous transportation of people and goods may be a rather simple process or an extremely tedious one. However, interaction with robots as guides, companions, or team members may be more complex and troublesome. In fact, people increasingly use robots with interfaces that are transparent in nature and that make humans feel generally comfortable when interacting with them. It won't be long before humans and robots have a much closer relationship, which will have implications for our lives and for society in general. Verbal and non-verbal communication, mutual understanding and learning, and the necessity of dealing with ethical issues are addressed in the article, which also highlights the current development and future direction of research in human-centered robotics.
... The primary considerations of social robots' impact on children's learning and development are whether children humanize social robots, perceive them as social beings, exhibit empathy for them, and interact with them differently than they do with other people or objects. Previous studies have found that children believe that social robots have a mental state, perceive robots as social others, and are able to respond to robots' social behaviors as they do to humans' (Melson et al., 2009;Breazeal et al., 2016;Westlund et al., 2017). Nevertheless, children's perceptions are somewhat contradictory. ...
Article
Full-text available
The presence of social robots in children’s daily environments has steadily increased. With the advancement of artificial intelligence (AI), social robots have influenced children’s learning and development. This study innovatively utilized the Web of Science database and conducted a bibliometric analysis of 517 publications on social robots supporting children’s learning and development before September 2022. Unlike most existing reviews, this study employed a synergistic combination of two complementary visualization tools, VOSviewer and CiteSpace, to map the intellectual structure and analyze the knowledge evolution path in this emerging interdisciplinary field. Specifically, VOSviewer generated visualizations depicting collaboration networks, research hotspots, and trends based on co-occurrences. CiteSpace enabled quantitative measurements of node centrality and burstness to reveal pivotal entities and emerging topics. Combining visual mapping and quantitative analysis by VOSviewer and CiteSpace allowed comprehensive landscape mapping for an in-depth investigation into the development of this field. This study proposes future research directions, including children’s perceptions of social robots, social robots enhancing children’s learning, social robots supporting children’s social and emotional development, and social robots for children with special needs. The findings also inform the design and application of child-friendly social robots equipped with generative AI techniques.
... Recently, several business partners have created a number of humanoid robots and investigated ways to use them to close the gap in education, especially in English as a foreign language [37]. However, getting humanoid robots on the market for the general public has been significantly hampered by the high cost of production. ...
Article
Full-text available
This study aimed to develop a prototype of the EVOCE robot and to assess its impact on students’ English vocabulary gains. The authors used two types of research design, namely a prototyping method and an experimental design, applying mixed (qualitative and quantitative) methods. Both vocabulary tests and observational prototyping data collection were employed. The robot prototype was used in class as a tool to examine how this medium helped young learners acquire basic English vocabulary; testing showed that the prototype suited young learners’ needs. Nonprobability sampling was the technique used, with a research sample of 40 students from two classes. Pre- and post-test vocabulary scores obtained with the EVOCE robot were compared using a t-test. The finding that t-stat > t-table at the 5% significance level (1.679 > 1.328) indicates that the robot helped students acquire new vocabulary and raised their vocabulary scores. This research therefore has consequences for teachers’ understanding that the use of robots can both increase students’ vocabulary and have an impact on their level of English proficiency.
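The pre/post comparison reported above is a paired-samples t-test. As a minimal illustration of the procedure (the scores below are invented, not the study's data), the statistic can be computed in pure Python:

```python
import math
from statistics import mean, stdev

def paired_t(pre, post):
    """Paired-samples t statistic for pre/post scores.

    Returns (t, df); t is then compared against the critical
    value from a t table at the chosen significance level.
    """
    diffs = [b - a for a, b in zip(pre, post)]
    n = len(diffs)
    se = stdev(diffs) / math.sqrt(n)  # standard error of the mean difference
    return mean(diffs) / se, n - 1

# Invented scores for five learners, for illustration only:
pre = [40, 55, 50, 45, 60]
post = [55, 65, 58, 52, 70]
t, df = paired_t(pre, post)
# If t exceeds the tabled critical value (e.g., the 1.328 cited
# above), the pre/post gain is judged statistically significant.
```

In practice one would use a statistics package (e.g., `scipy.stats.ttest_rel`), which also returns the p-value directly.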
... Similarly, preschoolers learn new words and facts from robots (Breazeal et al., 2016), especially robots they consider human-like (Brink & Wellman, 2020). Preschoolers will even use a robot's non-verbal cues to learn new words (Westlund et al., 2017). Four- to six-year-old children will also imitate a robot's irrelevant actions to achieve a goal (i.e., overimitation) as an indication of non-linguistic cultural learning (Sommer et al., 2020). ...
Article
The idea of treating robots as free agents seems only to have existed in the realm of science fiction. In our current world, however, children are interacting with robotic technologies that look, talk, and act like agents. Are children willing to treat such technologies as agents with thoughts, feelings, experiences, and even free will? In this paper, we explore whether children's developing concepts of agency and free will apply to robots. We first review the literature on children's agency and free-will beliefs, particularly looking at their beliefs about volition, responding to constraints, and deliberation about different options for action. We then review an emerging body of research that investigates children's beliefs about agency and free will in robots. We end by discussing the implications for developing beliefs about agency and free will in an increasingly technological world.
... Similarly, Westlund et al. (2017) have shown that children aged 4 to 6 can learn new words from a human, a tablet, and a robot alike. In their study, children were exposed to one informant at a time and learned six new words from each. ...
Article
Full-text available
In this paper, we investigated whether Canadian preschoolers prefer to learn from a competent robot over an incompetent human using the classic trust paradigm. An adapted Naive Biology task was also administered to assess children’s perception of robots. In Study 1, 3-year-olds and 5-year-olds were presented with two informants: a social humanoid robot (Nao) that labeled familiar objects correctly and a human informant who labeled them incorrectly. Both informants then labeled unfamiliar objects with novel labels. It was found that 3-year-old children equally endorsed the labels provided by the robot and the human, but 5-year-old children learned significantly more from the competent robot. Interestingly, 5-year-olds endorsed Nao’s labels even though they accurately categorized the robot as having mechanical insides. In contrast, 3-year-old children associated Nao with biological or mechanical insides equally. In Study 2, new samples of 3-year-olds and 5-year-olds were tested to determine whether the human-like appearance of the robot informant impacted children’s trust judgments. The procedure was identical to that of Study 1, except that a non-humanoid robot, Cozmo, replaced Nao. It was found that 3-year-old children still trusted the robot and the human equally and that 5-year-olds preferred to learn new labels from the robot, suggesting that the robot’s morphology does not play a key role in their selective trust strategies. It is concluded that by 5 years of age, preschoolers show a robust sensitivity to epistemic characteristics (e.g., competency), but that younger children’s decisions are equally driven by the animacy of the informant.
... Children respond to the robot's use of human-like social behaviour: e.g., they readily listen to and speak with the robots, and attend to the robot's posture and facial expressions. They adjust their speech and behavior to communicate with robots during learning tasks (Batliner et al. 2011; Freed 2012; Kanda et al. 2004) and follow a robot's gaze direction to, e.g., figure out what the robot is talking about (Kory-Westlund et al. 2015; Kory-Westlund et al. 2017a; Meltzoff et al. 2010). Children also respond to relationship-building behaviours: children mirror emotional expressions such as smiles and other behaviors such as head tilts and word use (Chen et al. 2020; Gordon et al. 2016; Kory-Westlund et al. 2017b); help the robots with tasks, take turns, and show affection such as hugs and gentle touches (Jeong et al. 2018; Kory-Westlund and Breazeal 2019b; Park et al. 2014); and disclose personal information such as their names, favorite colors, and stories about themselves (Kory-Westlund et al. 2018; Kory-Westlund and Breazeal 2019c). ...
... Robots have the ability to repeat the same set of words and actions over and over again, which can help children with communication disorders remember and use the learned vocabulary in everyday life [1]. Robots could support children's learning [13], could be personalized to each child's needs, could reduce teachers' workload, and could complement, improve, or even replace the work of therapists, especially in situations where there is a lack of therapists or of access to kindergartens or schools, as during the Covid-19 pandemic [9,1]. The lack of therapists could be critical for children with communication disorders. ...
Conference Paper
This paper presents a robot-based Cyber-Physical System (CPS) for language therapy for children with communication disorders that was created and tested in the frame of the CybSPEED H2020-MSCA-RISE project. The CPS includes humanoid and semi-humanoid robots and is based on a combination of the best experience, achievements, and practices of researchers from the domains of robotics, AI, systems science, and speech therapy. All experiments were conducted in accordance with the experimental protocol for measuring listening, understanding, and speaking skills for verbal language in children with communication disorders, approved by the Ethics Committee for Scientific Research (ECSR) of the IR-BAS. The paper gives a detailed description of the CPS and of the organization and conduct of the experiments. The first published results are discussed, confirming the effectiveness of the robot-based CPS for children with communication disorders.
... RALL is gradually becoming a commonly studied field of human-robot interaction. Research has clearly indicated that RALL can support both native and foreign language acquisition [31,32]. Randall [33] defined RALL as "the use of robots to teach people language expression or comprehension skills-such as speaking, writing, reading, or listening" (p. 1). ...
Article
Full-text available
This action research created an application system using robots as a tool for training English-language tour guides. It combined artificial intelligence (AI) and virtual reality (VR) technologies to develop content for tours and a 3D VR environment using the AI Unity plug-in for programming. Students learned to orally interact with the robot and act as a guide to various destinations. The qualitative methods included observation, interviews, and self-reporting of learning outcomes. Two students voluntarily participated in the study. The intervention lasted for ten weeks. The results indicated the teaching effectiveness of robot-assisted language learning (RALL). The students acknowledged the value of RALL and had positive attitudes toward it. The contextualized VR learning environment increased their motivation and engagement in learning, and students perceived that RALL could help develop autonomy, enhance interaction, and provide an active learning experience. The implications of the study are that RALL has potential and that it provides an alternative learning opportunity for students.
... This is an important issue, as multiple nonverbal communication modalities exist between humans and are estimated to carry a significant part of the meaning communicated between them. Nonverbal cues have been shown, for instance, to help children learn new words from robots [67]. Adding nonverbal interaction abilities to robots thus opens the perspective of building robots that can better engage with humans [68], i.e. social robots [13]. ...
Article
Full-text available
Robotics has a special place in AI because robots are connected to the real world, and robots increasingly appear in humans' everyday environment, from home to industry. Apart from cases where robots are expected to completely replace them, humans will largely benefit from real interactions with such robots. This is true not only for complex interaction scenarios like robots serving as guides, companions, or members of a team, but also for more predefined functions like the autonomous transport of people or goods. More and more, robots need suitable interfaces to interact with humans in a way that makes humans feel comfortable and that takes into account the need for a certain transparency about the actions taken. The paper describes the requirements and state of the art for human-centered robotics research and development, including verbal and non-verbal interaction, understanding and learning from each other, as well as ethical questions that have to be dealt with if robots are to be included in our everyday environment, influencing human life and societies.
... Since then, much work has been done to investigate the benefits [1][2][3] and limitations [4] of using robots in the educational process. Recent research shows that using a robot can be helpful when learning a foreign language [5], learning new words [6], in speech therapy [7], and for inclusive learning [8,9]. ...
Chapter
Full-text available
A new social robot platform for educational purposes is presented. The proposed robot looks like a cat with paws, ears, and a moustache. The robot can speak, process speech, and process video. The main design concept was a simple and robust construction, low prime cost, and visual appeal for children. The first prototype was based on a Raspberry Pi and commercially available peripheral components: camera, servos, microphones, and LEDs. Basic design principles and hardware and software requirements are described. Numerical parameters are presented, such as speech generation time and sign detection time. Four emotional states of the robot were developed: ‘happiness’, ‘sadness’, ‘confusion’ and ‘smirk’. Using the developed speech and video processing modules and emotional states, five child-robot interaction scenarios were implemented and then presented to children at exhibitions. The robot attracted children's attention; children reacted positively to it and described it as friendly and nice.
... They represent an emerging field of research focused on developing a social intelligence that aims to maintain the illusion of dealing with a human being [5]. Thanks to their ability to interact with humans naturally and familiarly, social robots are increasingly entering human life, not only for entertainment purposes but also to support users in their daily activities or in teaching and educational settings [6,7]. ...
Chapter
Social robots are autonomous entities able to engage humans at the emotional and social level. They are being used in several domains, especially those where children are the primary users (i.e., education, games, rehabilitation). The paper presents an experience in which the social robot Pepper is used as a storyteller. A storyteller robot should engage humans by combining its verbal and non-verbal behaviors and ‘immerse’ the user in the story. Therefore, to design an engaging and effective storytelling experience, we started by addressing a first design issue: does a human voice have an advantage over the robot's synthesized voice in this context? To this aim, two versions of the same story for children aged 8 to 9 were developed, and Pepper was used to tell the story in two modalities. In the first modality, Pepper was designed as a kind of audiobook, with the robot serving merely as a device while the story was narrated by a human voice; in the second modality, Pepper told the story using its own voice combined with non-verbal behaviors. The system was tested in a real context, and the results show that Pepper's own voice affected the children's emotional experience more positively, also giving the children the perception that they learn more easily.
... For example, social robots allow for interactions that make use of the physical environment (e.g., acting upon objects, enacting particular movements or operations, using various types of gestures) and they can stimulate more natural, human-like interactions because of their humanoid appearance (Belpaeme et al., 2018;van den Berghe et al., 2019). The use of iconic gestures is known to support L2 vocabulary learning (Tellier, 2008;Macedonia et al., 2011;Rowe et al., 2013), and a robot's iconic gestures and other non-verbal cues have been found to benefit learners as well (Kory Westlund et al., 2017;de Wit et al., 2018). ...
Article
Full-text available
The current study investigated how individual differences among children affect the added value of social robots for teaching second language (L2) vocabulary to young children. Specifically, we investigated the moderating role of three individual child characteristics deemed relevant for language learning: first language (L1) vocabulary knowledge, phonological memory, and selective attention. We expected children low in these abilities to particularly benefit from being assisted by a robot in a vocabulary training. An L2 English vocabulary training intervention consisting of seven sessions was administered to 193 monolingual Dutch five-year-old children over a three- to four-week period. Children were randomly assigned to one of three experimental conditions: 1) a tablet only, 2) a tablet and a robot that used deictic (pointing) gestures (the no-iconic-gestures condition), or 3) a tablet and a robot that used both deictic and iconic gestures (i.e., gestures depicting the target word; the iconic-gestures condition). There also was a control condition in which children did not receive a vocabulary training, but played dancing games with the robot. L2 word knowledge was measured directly after the training and two to four weeks later. In these post-tests, children in the experimental conditions outperformed children in the control condition on word knowledge, but there were no differences between the three experimental conditions. Several moderation effects were found. The robot’s presence particularly benefited children with larger L1 vocabularies or poorer phonological memory, while children with smaller L1 vocabularies or better phonological memory performed better in the tablet-only condition. 
Children with larger L1 vocabularies and better phonological memory performed better in the no-iconic-gestures condition than in the iconic-gestures condition, while children with better selective attention performed better in the iconic-gestures condition than the no-iconic-gestures condition. Together, the results showed that the effects of the robot and its gestures differ across children, which should be taken into account when designing and evaluating robot-assisted L2 teaching interventions.
... Anecdotally, children in the current study expressed great excitement about the robot tutor, and a recent review also emphasizes high enjoyment and anthropomorphic tendencies for robots in children in our age range (Ahmad et al., 2019;van Straten et al., 2020). A study similarly did not find an effect of tutor type, but showed that children gazed more at a robot tutor than a human tutor (Westlund et al., 2017). Another study with 10-to 13-year-olds also found more frequent gaze toward a robot compared to a human (Serholt and Barendregt, 2016). ...
Article
Full-text available
Social robots are receiving an ever-increasing interest in popular media and scientific literature. Yet, empirical evaluation of the educational use of social robots remains limited. In the current paper, we focus on how different scaffolds (co-speech hand gestures vs. visual cues presented on the screen) influence the effectiveness of a robot second language (L2) tutor. In two studies, Turkish-speaking 5-year-olds (n = 72) learned English measurement terms (e.g., big, wide) either from a robot or a human tutor. We asked whether (1) the robot tutor can be as effective as the human tutor when they follow the same protocol, (2) the scaffolds differ in how they support L2 vocabulary learning, and (3) the types of hand gestures affect the effectiveness of teaching. In all conditions, children learned new L2 words equally successfully from the robot tutor and the human tutor. However, the tutors were more effective when teaching was supported by the on-screen cues that directed children's attention to the referents of target words, compared to when the tutor performed co-speech hand gestures representing the target words (i.e., iconic gestures) or pointing at the referents (i.e., deictic gestures). The types of gestures did not significantly influence learning. These findings support the potential of social robots as a supplementary tool to help young children learn language but suggest that the specifics of implementation need to be carefully considered to maximize learning gains. Broader theoretical and practical issues regarding the use of educational robots are also discussed.
... Recent reviews on the interactions between neuro-typical children and a robot (Neumann, 2020; van Straten et al., 2020) indicate that only one study was conducted using NAO with a group of children from 2 to 8 years old (Yasumatsu et al., 2017). The few other studies conducted on 2-year-olds either used the tiny humanoid robot QRIO, which is smaller than a 2-year-old child (Tanaka et al., 2007), the iRobiQ robot that looks more like a toy (Hsiao et al., 2015), or robots specifically designed to be enjoyed by young children, like the stuffed dragon robot Dragonbot (Kory Westlund et al., 2017) and the RUBI-4 (Movellan et al., 2009). Thus, should we decide to conduct a longitudinal study from ages 2 to 9 using our contextual procedure, we would need to determine which robot is the most relevant to play the role of a rather slow and ignorant being for all ages. ...
Article
Full-text available
The poor performance of typically developing children younger than 4 on the first-order false-belief task “Maxi and the chocolate” is analyzed from the perspective of conversational pragmatics. An ambiguous question asked by an adult experimenter (perceived as a teacher) can receive different interpretations based on a search for relevance, by which children, according to their age, attribute different intentions to the questioner, within the limits of their own meta-cognitive knowledge. The adult experimenter tells the child the following story of object-transfer: “Maxi puts his chocolate into the green cupboard before going out to play. In his absence, his mother moves the chocolate from the green cupboard to the blue one.” The child must then predict where Maxi will pick up the chocolate when he returns. To the child, the question from an adult (a knowledgeable person) may seem surprising and can be understood as a question about his own knowledge of the world rather than about Maxi's mental representations. In our study, without any modification of the initial task, we disambiguated the context of the question by (1) replacing the adult experimenter with a humanoid robot presented as “ignorant” and “slow” but trying to learn and (2) placing the child in the role of a “mentor” (the knowledgeable person). Sixty-two typically developing 3-year-old children completed the first-order false-belief task “Maxi and the chocolate,” either with a human or with a robot. Results revealed a significantly higher success rate in the robot condition than in the human condition. Thus, young children seem to fail because of the pragmatic difficulty of the first-order task, which causes a difference of interpretation between the young child and the experimenter.
... In one recent study, researchers had a group of children between the ages of 2 and 5 learn about animal identification from either a person or a robot, and detected no difference in how much the children learned. 50 In a related study, robots read aloud to groups of children ages 4 to 7 in different tones of voice. 51 While the robot's tone of voice did not have much impact on the children's learned vocabulary, it did affect how well the children appeared to remember and retell the story. ...
... On the other hand, learning analytics provides tools to analyze and model data streams, and to provide insights into learning outcomes beyond simple pre-/post-test analysis. Some recent works on robots in education have started to use learning and interaction logs to extract learners' strategies in a problem-solving task [48] or to model learner behavior in a literacy scenario [49]. ...
Article
Full-text available
Purpose of Review With the growth in the number of market-available social robots, there is an increasing interest in research on the usage of social robots in education. This paper proposes a summary of trends highlighting current research directions and potential research gaps for social robots in education. We are interested in the design aspects and instructional setups used to evaluate social robotics systems in an educational setting. Recent Findings The literature demonstrates that as the field grows, the setup, methodology, and demographics targeted by social robotics applications seem to settle and standardize: a tutoring Nao robot with a tablet in front of a child seems the stereotypical social educational robotics setup. Summary An updated review of social robots in education is presented here. We propose, first, an analysis of the pioneering works in the field. Secondly, we explore the potential for education to be the ideal context in which to investigate central human-robot interaction research questions. A trend analysis is then proposed, demonstrating the potential for educational contexts to nest impactful research from human-robot interaction.
... Social interaction appeared to be influential in robotic education quality. This finding supported previous studies conducted in other countries (Breazeal et al., 2016; Fong et al., 2003; Westlund et al., 2017a, 2017b). Familiarity and novelty features also indicated a strong relationship with the social dimension in teachers' perspectives. ...
Article
Full-text available
Educational robots have been used in many countries as teaching assistants in elementary schools but robotic education quality is not well established in Thailand. The primary objective of this study was to identify and confirm quality dimensions in robotic education from the teachers’ perspectives. The sample size was 510 teachers who were observed in Thai elementary schools. Confirmatory Factor Analysis (CFA) indicated a good fit of a six-factor model to the observed data. The construct of CFA revealed six dimensions of robotic education quality as Social interaction, Cognitive function, Teaching method, Learner characteristics, Main features and Content. Results were similar to previous studies. Prototype development of an educational robot was proposed in relation to the Thai educational context. Further research, including large random comparative studies, needs to be performed.
Chapter
With its ability to combine cognitive psychology, data analytics, and machine learning, artificial intelligence (AI) holds great promise for improving academic performance and outcomes through customized learning experiences, and could transform the field of teaching and learning. By utilizing AI-powered tools and strategies, teachers can better prepare students for success in the digital age, improve student outcomes, and personalize learning. This chapter gives an overview of the current state of education and how AI can improve it. Applications of AI in online learning environments are also covered, along with a selection of AI tools with educational applications and their functions, and several e-learning programs that currently employ AI. Other technologies that, beyond AI, will raise the bar for education are also discussed.
Article
Full-text available
Some meta-analyses have confirmed the efficacy of technology-enhanced vocabulary learning. However, they have not delved into the specific ways in which technology-based activities facilitate vocabulary acquisition, or into first-language vocabulary learning. We conducted a systematic review that retrieved 1,221 journal articles published between 2011 and 2023, of which 40 met our inclusion criteria. Most of the sampled studies focused on teaching receptive vocabulary knowledge and vocabulary breadth. All utilized cognitive strategies. Their common design features included noticing and receptive or productive retrieval, and most implicitly drew upon dual-coding theory. Our findings highlight the need for a balanced approach to vocabulary learning, encompassing both vocabulary breadth and depth, as well as receptive and productive knowledge. They also suggest that affective and social learning strategies should be promoted alongside the cognitive ones that are currently dominant. Additionally, our identification of commonly and rarely used design features can guide curriculum designers to develop more effective tools. Lastly, we argue that the design of technology-enhanced learning should be theory-driven.
Article
Full-text available
While numerous studies of robot-assisted language learning (RALL) for English-as-a-foreign-language (EFL) learners' language skill development have been done, a comprehensive and theoretically driven meta-analysis of its effects is still lacking. To fill the gap, drawing on Activity Theory (AT), this study reported a meta-analysis of 47 independent studies from 29 literature samples involving 1791 EFL learners on RALL for language skill development published during 2004-2023. The results indicated that the overall effect size was g = .69, 95% CI [.49, .90], suggesting that RALL outperforms non-RALL conditions. In addition, educational levels and intervention durations were found to be significant moderators. Based on the results, implications for practice were discussed.
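The Hedges' g values reported by such meta-analyses are bias-corrected standardized mean differences between treatment and control groups. A minimal sketch of that computation follows; all group statistics below are made up for illustration and are not taken from the meta-analysis itself:

```python
import math

def hedges_g(mean_t, mean_c, sd_t, sd_c, n_t, n_c):
    """Hedges' g: standardized mean difference between a treatment
    group (e.g., RALL) and a control group (non-RALL), with a
    small-sample bias correction applied to Cohen's d."""
    # Pooled standard deviation across the two groups
    sd_pooled = math.sqrt(((n_t - 1) * sd_t**2 + (n_c - 1) * sd_c**2)
                          / (n_t + n_c - 2))
    d = (mean_t - mean_c) / sd_pooled  # Cohen's d
    # Hedges' correction factor for small samples
    j = 1 - 3 / (4 * (n_t + n_c) - 9)
    return d * j

# Hypothetical post-test scores for two groups of 30 learners each
g = hedges_g(mean_t=78.0, mean_c=70.0, sd_t=11.0, sd_c=12.0,
             n_t=30, n_c=30)
print(round(g, 2))  # ≈ 0.69
```

By the conventional benchmarks (0.2 small, 0.5 medium, 0.8 large), a g of this size would count as a medium-to-large effect, which is how the overall RALL result above is usually interpreted.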
Article
Although robots’ social behaviors are known for their capacity to facilitate learner–robot interaction for language learning, their application and effect have not been adequately explored. This study reviewed 59 empirical articles to examine the contexts and application of various social behaviors of robots for language learning, and conducted a meta-analysis of 18 study samples to evaluate the effect of robots’ social supportive behaviors on language learning achievement. Results indicate that robots’ social behaviors have mostly been applied in studies with K–12 students, for learning vocabulary in English, with small sample sizes (below 80 participants), and lasting for one session. Second, various verbal and non-verbal behaviors of robots have been identified and applied, showing mixed results on language learning achievement. Third, robots’ social supportive behaviors have produced a positive effect on language learning achievement compared to neutral behaviors (g = 0.269). Finally, detailed suggestions for future research are discussed.
Chapter
Full-text available
Essays on the challenges and risks of designing algorithms and platforms for children, with an emphasis on algorithmic justice, learning, and equity. One in three Internet users worldwide is a child, and what children see and experience online is increasingly shaped by algorithms. Though children's rights and protections are at the center of debates on digital privacy, safety, and Internet governance, the dominant online platforms have not been constructed with the needs and interests of children in mind. The editors of this volume, Mizuko Ito, Remy Cross, Karthik Dinakar, and Candice Odgers, focus on understanding diverse children's evolving relationships with algorithms, digital data, and platforms and offer guidance on how stakeholders can shape these relationships in ways that support children's agency and protect them from harm. This book includes essays reporting original research on educational programs in AI relational robots and Scratch programming, on children's views on digital privacy and artificial intelligence, and on discourses around educational technologies. Shorter opinion pieces add the perspectives of an instructional designer, a social worker, and parents. The contributing social, behavioral, and computer scientists represent perspectives and contexts that span education, commercial tech platforms, and home settings. They analyze problems and offer solutions that elevate the voices and agency of parents and children. Their essays also build on recent research examining how social media, digital games, and learning technologies reflect and reinforce unequal childhoods. Contributors: Paulo Blikstein, Izidoro Blikstein, Marion Boulicault, Cynthia Breazeal, Michelle Ciccone, Sayamindu Dasgupta, Devin Dillon, Stefania Druga, Jacqueline M. Kory-Westlund, Aviv Y. Landau, Benjamin Mako Hill, Adriana Manago, Siva Mathiyazhagan, Maureen Mauk, Stephanie Nguyen, W. Ian O'Byrne, Kathleen A. Paciga, Milo Phillips-Brown, Michael Preston, Stephanie M. Reich, Nicholas D. Santer, Allison Stark, Elizabeth Stevens, Kristen Turner, Desmond Upton Patton, Veena Vasudevan, Jason Yip
Article
Full-text available
Purpose The aim of this study is to examine the existing literature on service robots in order to identify prominent themes, assess the present state of service robotics research and highlight the contributions of seminal publications in the business, management and hospitality domain. Design/methodology/approach This study analysed 332 Scopus papers from 1985 to 2022 using bibliometric techniques like citation and co-citation analysis. Findings The study findings highlighted that there has been a consistent rise in publications related to service robots. The paper identifies three significant themes in the service robot literature: adoption of service robots in the context of customer service, anthropomorphism and integration of artificial intelligence in robotic service. Furthermore, this study highlights prominent authors, journals, institutions and countries associated with research on service robots and discusses the future research opportunities in this domain. Originality/value This study contributes to the service robots’ literature in the hospitality context by compilation of various reference materials using a comprehensive bibliometric analysis. Previous studies do not point out crucial themes in this area, nor do they provide an overview of prominent journals, institutions, authors and trends in this field. Therefore, this study attempts to fill the lacunae.
Article
With rapid advances in Artificial Intelligence (AI) over the last decade, schools have increasingly employed innovative tools, intelligent applications and methods that are changing the education system with the aim of improving both user experience and learning gain in the classrooms. Even though the use of AI in education is not new, it has not unleashed its full potential yet. Much of the available research looks at educational robotics and at non-intelligent robots in education. Only recently, research has sought to assess the potential of Socially Assistive Robots (SARs), including humanoids, within the domain of classroom learning, particularly in relation to learning languages. Yet, the use of this form of AI in the field of mathematics and science constitutes a notable gap in this field. This study aims to critically review the research on the use of SARs in the pre-tertiary classroom teaching of mathematics and science. A further aim is to identify the benefits and disadvantages of such technology. A database search conducted between January and April 2018 yielded twenty-one studies meeting the set inclusion criteria for our systematic review. Findings were grouped into four major categories synthesising current evidence of the contribution of SARs in pre-tertiary education: learning gain, user experience, attitude, and usability of SARs within classroom settings. Overall, the use of SARs in pre-tertiary education is promising, but studies focussing on mathematics and science are significantly under-represented. Further evidence is also required around SARs' specific contributions to learning more broadly, as well as enabling/impeding factors, such as SARs' personalisation and appearance, or the role of families and ethical considerations. Finally, SARs' potential to enhance accessibility and inclusivity of multi-cultural pre-tertiary classrooms is almost unexplored.
Article
Full-text available
The focus of the current chapter is on humanoid robots as part of an inclusive education. It presents a brief overview of the main features of cyber physical systems which could be used as an advantage with children with special educational needs. Based on the specifics of the main types of special educational needs, a list of suggestions about the practical implications of educational robots in the classroom has been generated. A pilot study of the perception and attitude of children and teachers in a local Bulgarian school towards the application of cyber physical systems in education has been conducted. Based on previous research and the findings of the pilot study, a few gaps in knowledge have been identified. First, the lack of empirical work on the application of technology to subjects, such as biology, chemistry, history, or to the development of social skills and creativity. Second, the scarce evidence of the long-term effects of interventions with children with special educational needs. Third, the lack of research on the attitudes of teachers with and without special educational needs children in the class towards educational robots. Last, but not least, the need for comparison of the perceptions and expectations of users of such technology across cultures.
Article
Full-text available
Prior research has demonstrated the importance of children's peers for their learning and development. In particular, peer interaction, especially with more advanced peers, can enhance preschool children's language growth. In this paper, we explore one factor that may modulate children's language learning with a peer-like social robot: rapport. We explore connections between preschool children's learning, rapport, and emulation of the robot's language during a storytelling intervention. We performed a long-term field study in a preschool with 17 children aged 4–6 years. Children played a storytelling game with a social robot for 8 sessions over two months. For some children, the robot matched the level of its stories to the children's language ability, acting as a slightly more advanced peer (Matched condition); for the others, the robot did not match the story level (Unmatched condition). We examined children's use of target vocabulary words and key phrases used by the robot, children's emulation of the robot's stories during their own storytelling, and children's language style matching (LSM—a measure of overlap in function word use and speaking style associated with rapport and relationship) to see whether they mirrored the robot more over time. We found that not only did children emulate the robot more over time, but also, children who emulated more of the robot's phrases during storytelling scored higher on the vocabulary posttest. Children with higher LSM scores were more likely to emulate the robot's content words in their stories. Furthermore, the robot's personalization in the Matched condition led to increases in both children's emulation and their LSM scores. Together, these results suggest first, that interacting with a more advanced peer is beneficial for children, and second, that children's emulation of the robot's language may be related to their rapport and their learning. 
This is the first study to empirically support that rapport may be a modulating factor in children's peer learning, and furthermore, that a social robot can serve as an effective intervention for language development by leveraging this insight.
Conference Paper
Full-text available
Robots are gradually but steadily being introduced in our daily lives. A paramount application is that of education, where robots can assume the role of a tutor, a peer or simply a tool to help learners in a specific knowledge domain. Such an endeavor poses specific challenges: affective social behavior, proper modelling of the learner’s progress, discrimination of the learner’s utterances, expressions and mental states, which, in turn, require an integrated architecture combining perception, cognition and action. In this paper we present an attempt to improve the current state of robots in the educational domain by introducing the EASEL EU project. Specifically, we introduce EASEL’s unified robot architecture, an innovative Synthetic Tutor Assistant (STA) whose goal is to interactively guide learners in a science-based learning paradigm, allowing us to achieve such rich multimodal interactions.
Conference Paper
Full-text available
The field of Human-Robot Interaction (HRI) is increasingly exploring the use of social robots for educating children. Commonly, non-academic audiences will ask how robots compare to humans in terms of learning outcomes. This question is also interesting for social roboticists as humans are often assumed to be an upper benchmark for social behaviour, which influences learning. This paper presents a study in which learning gains of children are compared when taught the same mathematics material by a robot tutor and a non-expert human tutor. Significant learning occurs in both conditions, but the children improve more with the human tutor. This difference is not statistically significant, but the effect sizes fall in line with findings from other literature showing that humans outperform technology for tutoring. We discuss these findings in the context of applying social robots in child education.
Article
Full-text available
This study investigates whether the presence of a social robot and interaction with it raises children’s interest in science. We placed Robovie, our social robot, in an elementary school science class where children could freely interact with it during their breaks. Robovie was tele-operated and its behaviors were designed to answer any questions related to science. It encouraged the children to ask about science by initiating conversations about class topics. Our results show that even though Robovie did not influence the science curiosity of the entire class, curiosity did increase in individual children who asked Robovie science questions.
Conference Paper
Full-text available
Social robots are finding increasing application in the domain of education, particularly for children, to support and augment learning opportunities. With an implicit assumption that social and adaptive behaviour is desirable, it is therefore of interest to determine precisely how these aspects of behaviour may be exploited in robots to support children in their learning. In this paper, we explore this issue by evaluating the effect of a social robot tutoring strategy with children learning about prime numbers. It is shown that the tutoring strategy itself leads to improvement, but that the presence of a robot employing this strategy amplifies this effect, resulting in significant learning. However, it was also found that children interacting with a robot using social and adaptive behaviours in addition to the teaching strategy did not learn a significant amount. These results indicate that while the presence of a physical robot leads to improved learning, caution is required when applying social behaviour to a robot in a tutoring context.
Conference Paper
Full-text available
This paper presents a study that compares a humanoid robotic tutor to a human tutor when instructing school children to build a LEGO house. A total of 27 students, between the ages of 11 and 15, divided into two groups, participated in the study, and data were collected to investigate the participants' success rate, requests for help, engagement, and attitude change toward robots following the experiment. The results reveal that both groups are equally successful in executing the task. However, students ask the human tutor more often for help, while students working with the robotic tutor are more eager to perform well on the task. Finally, all students get a more positive attitude toward a robotic tutor following the experiment, but those in the robot condition change their attitude somewhat more for certain questions, illustrating the importance of real interaction experiences prior to eliciting students' attitudes toward robots. The paper concludes that students do follow instructions from a robotic tutor but that more long-term interaction is necessary to study lasting effects.
Article
Full-text available
The coordination of visual attention among social partners is central to many components of human behavior and human development. Previous research has focused on one pathway to the coordination of looking behavior by social partners, gaze following. The extant evidence shows that even very young infants follow the direction of another's gaze but they do so only in highly constrained spatial contexts because gaze direction is not a spatially precise cue as to the visual target and not easily used in spatially complex social interactions. Our findings, derived from the moment-to-moment tracking of eye gaze of one-year-olds and their parents as they actively played with toys, provide evidence for an alternative pathway, through the coordination of hands and eyes in goal-directed action. In goal-directed actions, the hands and eyes of the actor are tightly coordinated both temporally and spatially, and thus, in contexts including manual engagement with objects, hand movements and eye movements provide redundant information about where the eyes are looking. Our findings show that one-year-olds rarely look to the parent's face and eyes in these contexts but rather infants and parents coordinate looking behavior without gaze following by attending to objects held by the self or the social partner. This pathway, through eye-hand coupling, leads to coordinated joint switches in visual attention and to an overall high rate of looking at the same object at the same time, and may be the dominant pathway through which physically active toddlers align their looking behavior with a social partner.
Article
Full-text available
For robots to interact effectively with human users they must be capable of coordinated, timely behavior in response to social context. The Adaptive Strategies for Sustainable Long-Term Social Interaction (ALIZ-E) project focuses on the design of long-term, adaptive social interaction between robots and child users in real-world settings. In this paper, we report on the iterative approach taken to scientific and technical developments toward this goal: advancing individual technical competencies and integrating them to form an autonomous robotic system for evaluation “in the wild.” The first evaluation iterations have shown the potential of this methodology in terms of adaptation of the robot to the interactant and the resulting influences on engagement. This sets the foundation for an ongoing research program that seeks to develop technologies for social robot companions.
Article
Full-text available
Children will increasingly come of age with personified robots and potentially form social and even moral relationships with them. What will such relationships look like? To address this question, 90 children (9-, 12-, and 15-year-olds) initially interacted with a humanoid robot, Robovie, in 15-min sessions. Each session ended when an experimenter interrupted Robovie's turn at a game and, against Robovie's stated objections, put Robovie into a closet. Each child was then engaged in a 50-min structural-developmental interview. Results showed that during the interaction sessions, all of the children engaged in physical and verbal social behaviors with Robovie. The interview data showed that the majority of children believed that Robovie had mental states (e.g., was intelligent and had feelings) and was a social being (e.g., could be a friend, offer comfort, and be trusted with secrets). In terms of Robovie's moral standing, children believed that Robovie deserved fair treatment and should not be harmed psychologically but did not believe that Robovie was entitled to its own liberty (Robovie could be bought and sold) or civil rights (in terms of voting rights and deserving compensation for work performed). Developmentally, while more than half the 15-year-olds conceptualized Robovie as a mental, social, and partly moral other, they did so to a lesser degree than the 9- and 12-year-olds. Discussion focuses on how (a) children's social and moral relationships with future personified robots may well be substantial and meaningful and (b) personified robots of the future may emerge as a unique ontological category.
Conference Paper
Full-text available
We report results of a study in which a low cost sociable robot was immersed at an Early Childhood Education Center for a period of 2 weeks. The study was designed to investigate whether the robot, which operated fully autonomously during the intervention period, could improve target vocabulary skills of 18- to 24-month-old toddlers. The results showed a 27% improvement in knowledge of the target words taught by the robot when compared to a matched set of control words. The results suggest that sociable robots may be an effective and low cost technology to enrich Early Childhood Education environments.
Article
Full-text available
Humans and objects, and thus social interactions about objects, exist within space. Words direct listeners' attention to specific regions of space. Thus, a strong correspondence exists between where one looks, one's bodily orientation, and what one sees. This leads to further correspondence with what one remembers. Here, we present data suggesting that children use associations between space and objects and space and words to link words and objects: space binds labels to their referents. We tested this claim in four experiments, showing that the spatial consistency of where objects are presented affects children's word learning. Next, we demonstrate that a process model that grounds word learning in the known neural dynamics of spatial attention, spatial memory, and associative learning can capture the suite of results reported here. This model also predicts that space is special, a prediction supported in a fifth experiment that shows children do not use color as a cue to bind words and objects. In a final experiment, we ask whether spatial consistency affects word learning in naturalistic word learning contexts. Children of parents who spontaneously keep objects in a consistent spatial location during naming interactions learn words more effectively. Together, the model and data show that space is a powerful tool that can effectively ground word learning in social contexts.
Article
Full-text available
Children can learn aspects of the meaning of a new word on the basis of only a few incidental exposures and can retain this knowledge for a long period, a process dubbed 'fast mapping'. It is often maintained that fast mapping is the result of a dedicated language mechanism, but it is possible that this same capacity might apply in domains other than language learning. Here we present two experiments in which three- and four-year-old children and adults were taught a novel name and a novel fact about an object, and were tested on their retention immediately, after a 1-week delay, or after a 1-month delay. Our findings show that fast mapping is not limited to word learning, suggesting that the capacity to learn and retain new words is the result of learning and memory abilities that are not specific to language.
Conference Paper
This paper explores children's social engagement with a robotic tutor by analyzing their behavioral reactions to socially significant events initiated by the robot. Specific questions addressed in this paper are whether children express signs of social engagement as a reaction to such events, and if so, in what way. The second question is whether these reactions differ between different types of social events, and finally, whether such reactions disappear or change over time. Our analysis indicates that children indeed show behaviors that indicate social engagement using a range of communicative channels. While gaze towards the robot's face is the most common indication for all types of social events, verbal expressions and nods are especially common after questions, and smiles are most common after positive feedback. Although social responses in general decrease slightly over time, they are still observable after three sessions with the robot.
Conference Paper
Children's oral language skills in preschool can predict their success in reading, writing, and academics in later schooling. Helping children improve their language skills early on could lead to more children succeeding later. As such, we examined the potential of a sociable robotic learning/teaching companion to support children's early language development. In a microgenetic study, 17 children played a storytelling game with the robot eight times over a two-month period. We evaluated whether a robot that "leveled" its stories to match the child's current abilities would lead to greater learning and language improvements than a robot that was not matched. All children learned new words, created stories, and enjoyed playing. Children who played with a matched robot used more words, and more diverse words, in their stories than unmatched children. Understanding the interplay between the robot's and the children's language will inform future work on robot companions that support children's education through play.
Article
Children ranging from 3 to 5 years were introduced to two anthropomorphic robots that provided them with information about unfamiliar animals. Children treated the robots as interlocutors. They supplied information to the robots and retained what the robots told them. Children also treated the robots as informants from whom they could seek information. Consistent with studies of children's early sensitivity to an interlocutor's non-verbal signals, children were especially attentive and receptive to whichever robot displayed the greater non-verbal contingency. Such selective information seeking is consistent with recent findings showing that although young children learn from others, they are selective with respect to the informants that they question or endorse.
Article
Researchers studying ways in which humans and robots interact in social settings have a problem: they don't have a robot to use. There is a need for a socially expressive robot that can be deployed outside of a laboratory and support remote operation and data collection. This work aims to fill that need with DragonBot - a platform for social robotics specifically designed for long-term interactions. This thesis is divided into two parts. The first part describes the design and implementation of the hardware, software, and aesthetics of the DragonBot-based characters. Through the use of a mobile phone as the robot's primary computational device, we aim to drive down the hardware cost and increase the availability of robots "in the wild". The second part of this work takes an initial step towards evaluating DragonBot's effectiveness through interactions with children. We describe two different teleoperation interfaces that allow a human to control DragonBot's behavior with differing amounts of autonomy granted to the robot. A human subject study was conducted and these interfaces were compared through a sticker-sharing task between the robot and children aged four to seven. Our results show that when a human operator is able to focus on the social portions of an interaction and the robot is given more autonomy, children treat the character more like a peer. This is indicated by the fact that more children re-engaged the robot with the higher level of autonomy when they were asked to split up stickers between the two participants.
Article
This thesis proposes an approach to language learning for preschool aged children using social robots as conversation partners within a shared play context for children and their families. It addresses an underserved age for language learning, where early learning can greatly impact later educational success, but that cannot benefit from text-based interventions. With the goal of establishing a shared physical context between multiple participants without absorbing all of the children's focus onto digital content, a hybrid physical and digital interface was iteratively designed and play-tested. This interface took the form of a "café table" on which the child and robot could share food. A robot was programmed to introduce itself and name foods in French, eat some foods and express dislike towards others, respond with distress to a new object, show its focus of attention through gaze, and in one experimental condition, express feedback about its comprehension when spoken to in French or English. The study found that some children as young as 3 years old would treat a social robot as an agent capable of understanding them and of perceiving a shared physical context, and would spontaneously modify their use of language and gesture in order to communicate with it - particularly when the robot communicated confusion. The study also found that parents tended to frame their scaffolding of the children's behavior with the robot in a social context, and without prompting aligned their guidance and reinforcement with language learning goals. After one exposure to the robot and new French vocabulary, children did not retain the robot's utterances, but engaged in communicative and social behaviors and language mimicry throughout the interaction. The system appeared to support multi-user social participation, including both caretakers and siblings of the participants.
Article
Children's oral language skills in preschool can predict their academic success later in life. Increasing children's skills early on could improve their success in middle and high school. To this end, I examined the potential of a sociable robotic learning/teaching companion in supplementing children's early language education. The robot was designed as a social character, engaging children as a peer, not as a teacher, within a relational, dialogic context. The robot targeted the social, interactive nature of language learning through a storytelling game, mediated by a tablet, that the robot and child played together. During the game, the robot introduced new vocabulary words and modeled good story narration skills. In a microgenetic study, 17 children played the storytelling game with the robot eight times each over a two month period. With half the children, the robot adapted its level of language to the child's level - so that, as children improved their storytelling skills, so did the robot. The other half played with a robot that did not adapt. I evaluated whether this adaptation influenced (i) whether children learned new words from the robot, (ii) the complexity and style of stories children told, and (iii) the similarity of children's stories to the robot's stories. I expected that children would learn more from a robot that adapted, and that they would copy its stories and narration style more than they would with a robot that did not adapt. Children's language use was tracked across sessions. I found that children in the adaptive condition maintained or increased the amount and diversity of the language they used during interactions with the robot. While children in all conditions learned new vocabulary words, created new stories during the game, and enjoyed playing with the robot, children who played with the adaptive robot improved more than children who played with the non-adaptive robot. 
Understanding how the robot influences children's language, and how a robot could support language development will inform the design of future learning/teaching companions that engage children as peers in educational play.
Article
In contrast to conventional teaching agents (including robots) that were designed to play the role of human teachers or caregivers, we propose the opposite scenario, in which robots receive instruction or care from children. We hypothesize that by using this care-receiving robot, we may construct a new educational framework whose goal is to promote children's spontaneous learning by teaching, through their teaching of the robot. In this paper, we describe the introduction of a care-receiving robot into a classroom at an English language school for Japanese children (3–6 years of age) and then conduct an experiment to evaluate whether the care-receiving robot can promote their learning of English verbs. The results suggest that the idea of a care-receiving robot is feasible and that the robot can help children learn new English verbs efficiently. In addition, we report on investigations into several forms of teaching performed by children, which were revealed through observations of the children, parent interviews, and other useful knowledge. These can be used to improve the design of care-receiving robots for educational purposes.
Article
Research from the past two decades indicates that preschool is a critical time for children's oral language and vocabulary development, which in turn is a primary predictor of later academic success. However, given the inherently social nature of language learning, it is difficult to develop scalable interventions for young children. Here, we present one solution in the form of robotic learning companions, using the DragonBot platform. Designed as interactive, social characters, these robots combine the flexibility and personalization afforded by educational software with a crucial social context, as peers and conversation partners. They can supplement teachers and caregivers, allowing remote operation as well as the potential for autonomously participating with children in language learning activities. Our aim is to demonstrate the efficacy of the DragonBot platform as an engaging, social, learning companion.
Article
Children learn about the world from the testimony of other people, often coming to accept what they are told about a variety of unobservable and indeed counter-intuitive phenomena. However, research on children's learning from testimony has paid limited attention to the foundations of that capacity. We ask whether those foundations can be observed in infancy. We review evidence from two areas of research: infants' sensitivity to the emotional expressions of other people; and their capacity to understand the exchange of information through non-verbal gestures and vocalization. We conclude that a grasp of the bi-directional exchange of information is present early in the second year. We discuss the implications for future research, especially across different cultural settings.
Article
Systematic observations of affiliative interaction in 15 stable peer groups were conducted across 3 years in an urban day-care center. These groups contained 193 French-speaking children (98 girls, 95 boys) ranging in age from 1 to 6 years. Cross-sectional analyses were conducted to assess the impact of age and sex on the rate of social activity and the degree of sexual segregation. Analysis of variance revealed that rate of affiliative activity increased as a linear function of age. Older children exhibited stronger preference for same-sex social partners than younger children, and a significant age X sex interaction showed that girls began to prefer same-sex peers earlier than boys, who subsequently surpassed girls in sexual discrimination. Trend analyses revealed different functions for boys and girls in the development of same-sex preferences. The utility of a 2-process model for understanding sex differences in social development and peer socialization is discussed.
Article
Adopting a procedure developed with human speakers, we examined infants' ability to follow a nonhuman agent's gaze direction and subsequently to use its gaze to learn new words. When a programmable robot acted as the speaker (Experiment 1), infants followed its gaze toward the word referent whether or not it coincided with their own focus of attention, but failed to learn a new word. When the speaker was human, infants correctly mapped the words (Experiment 2). Furthermore, when the robot interacted contingently, this did not facilitate infants' word mapping (Experiment 3). These findings suggest that gaze following upon hearing a novel word is not sufficient to learn the referent of the word when the speaker is nonhuman.
Article
Four experiments explored the processes that bridge between referent selection and word learning. Twenty-four-month-old infants were presented with several novel names during a referent selection task that included both familiar and novel objects and tested for retention after a 5-min delay. The 5-min delay ensured that word learning was based on retrieval from long-term memory. Moreover, the relative familiarity of objects used during the retention test was explicitly controlled. Across experiments, infants were excellent at referent selection, but very poor at retention. Although the highly controlled retention test was clearly challenging, infants were able to demonstrate retention of the first 4 novel names presented in the session when referent selection was augmented with ostensive naming. These results suggest that fast mapping is robust for reference selection but might be more transient than previously reported for lexical retention. The relations between reference selection and retention are discussed in terms of competitive processes on 2 timescales: competition among objects on individual referent selection trials and competition among multiple novel name–object mappings made across an experimental session.
Article
This research examines whether infants actively contribute to the achievement of joint reference. One possibility is that infants tend to link a label with whichever object they are focused on when they hear the label. If so, infants would make a mapping error when an adult labels a different object than the one occupying their focus. Alternatively, infants may be able to use a speaker's nonverbal cues (e.g., line of regard) to interpret the reference of novel labels. This ability would allow infants to avoid errors when adult labels conflict with infants' focus. 64 16–19-month-olds were taught new labels for novel toys in 2 situations. In follow-in labeling, the experimenter looked at and labeled a toy at which infants were already looking. In discrepant labeling, the experimenter looked at and labeled a different toy than the one occupying infants' focus. Infants' responses to subsequent comprehension questions revealed that they (a) successfully learned the labels introduced during follow-in labeling, and (b) displayed no tendency to make mapping errors after discrepant labeling. Thus infants of only 16 to 19 months understand that a speaker's nonverbal cues are relevant to the reference of object labels; they already can contribute to the social coordination involved in achieving joint reference.
Article
Infants as young as 12 months readily modulate their behavior toward novel, ambiguous objects based on emotional responses that others display. Such social-referencing skill offers powerful benefits to infants' knowledge acquisition, but the magnitude of these benefits depends on whether they appreciate the referential quality of others' emotional messages, and are skilled at using cues to reference (e.g., gaze direction, body posture) to guide their interpretation of such messages. Two studies demonstrated referential understanding in 12- and 18-month-olds' responses to another's emotional outburst. Infants relied on the presence versus absence of referential cues to determine whether an emotional message should be linked with a salient, novel object in the first study (N = 48), and they actively consulted referential cues to disambiguate the intended target of an affective display in the second study (N = 32). These findings provide the first experimental evidence of such sophisticated referential abilities in 12-month-olds, as well as the first evidence that infant social referencing at any age actually trades on referential understanding.
Article
Gaze following is a key component of human social cognition. Gaze following directs attention to areas of high information value and accelerates social, causal, and cultural learning. An issue for both robotic and infant learning is whose gaze to follow. The hypothesis tested in this study is that infants use information derived from an entity's interactions with other agents as evidence about whether that entity is a perceiver. A robot was programmed so that it could engage in communicative, imitative exchanges with an adult experimenter. Infants who saw the robot act in this social-communicative fashion were more likely to follow its line of regard than those without such experience. Infants use prior experience with the robot's interactions as evidence that the robot is a psychological agent that can see. Infants want to look at what the robot is seeing, and thus shift their visual attention to the external target.
Article
This research examines whether infants actively seek information from a speaker regarding the referent of the speaker's utterance. Forty-eight infants (in three age groups: 1;2-1;3, 1;4-1;5, and 1;6-1;7) heard novel labels for novel objects in two situations: follow-in labelling (the experimenter looked at and labelled the toy of the infant's focus) vs. discrepant labelling (the experimenter looked at and labelled a different toy than that of the infant's focus). Subsequently, half of the infants were asked comprehension questions (e.g. 'Where's the peri?'). The other half were asked preference questions (e.g. 'Where's the one you like?'), to ensure that their comprehension performance was not merely the result of preferential responding. The comprehension results revealed developmental change in both (a) infants' ability to establish new word-object mappings (infants aged 1;2-1;3 failed to establish stable word-object links even in follow-in labelling), and (b) infants' ability to pinpoint the correct referent during discrepant labelling (only infants aged 1;6-1;7 succeeded). Thus the period between 1;2 and 1;7 represents a time of change in infants' ability to establish new word-object mappings: infants are becoming increasingly adept at acquiring new labels under minimal learning conditions.
Article
Four studies investigated whether and when infants connect information about an actor's affect and perception to their action. Arguably, this may be a crucial way in which infants come to recognize the intentional behaviors of others. In Study 1 an actor grasped one of two objects in a situation where cues from the actor's gaze and expression could serve to determine which object would be grasped, specifically the actor first looked at and emoted positively about one object but not the other. Twelve-month-olds, but not 8-month-olds, recognized that the actor was likely to grasp the object which she had visually regarded with positive affect. Studies 2, 3, and 4 replicated the main finding from Study 1 with 12- and 14-month-olds and included several contrasting conditions and controls. These studies provide evidence that the ability to use information about an adult's direction of gaze and emotional expression to predict action is both present, and developing at the end of the first year of life.
Article
I advance the hypothesis that the earliest phases of language acquisition -- the developmental transition from an initial universal state of language processing to one that is language-specific -- requires social interaction. Relating human language learning to a broader set of neurobiological cases of communicative development, I argue that the social brain 'gates' the computational mechanisms involved in human language learning.
Towards a synthetic tutor assistant: The EASEL project and its architecture
  • V. Vouloutsi
  • M. Blancas
  • R. Zucca
  • P. Omedas
  • D. Reidsma
  • D. Davison
  • V. Charisi
  • F. Wijnen
  • J. van der Meij
  • V. Evers
  • D. Cameron
  • S. Fernando
  • R. Moore
  • T. Prescott
  • D. Mazzei
  • M. Pieroni
  • L. Cominelli
  • R. Garofalo
  • D.D. Rossi
  • P.F.M.J. Verschure
Television as incidental language teacher
  • Naigles
Naigles, L.R., Mayeux, L., 2001. Television as incidental language teacher. Handb. Child. Media 135-152.
Sociable robot improves toddler vocabulary skills
  • J. Movellan
  • M. Eckhardt
  • M. Virnes
  • A. Rodriguez
The interplay of robot language level with children’s language learning during storytelling
  • J. Kory Westlund
  • C. Breazeal
Multimodal child-robot interaction: Building social bonds
  • Y. Demiris
  • R. Ros-Espinoza
  • A. Beck
  • L. Cañamero
  • A. Hiolle
  • M. Lewis
  • I. Baroni
  • M. Nalin
  • P. Cosi
  • G. Paci
  • F. Tesser
  • G. Sommavilla
  • R. Humbert
Demiris, Y., Ros-Espinoza, R., Beck, A., Cañamero, L., Hiolle, A., Lewis, M., Baroni, I., Nalin, M., Cosi, P., Paci, G., Tesser, F., Sommavilla, G., Humbert, R., 2012. Multimodal Child-Robot Interaction: Building Social Bonds. J. Hum.-Robot Interact. 1, 33-53.
Robotic learning companions for early language development
  • J.M. Kory
  • S. Jeong
  • C.L. Breazeal
Kory, J.M., Jeong, S., Breazeal, C.L., 2013. Robotic learning companions for early language development. In: J. Epps, F. Chen, S. Oviatt, K. Mase (Eds.), Proceedings of the 15th ACM International Conference on Multimodal Interaction. ACM, New York, NY, pp. 71-72.